Search Results: "Patrick Schoenfeld"

30 November 2011

Patrick Schoenfeld: LDAP performance is poor..

Today's rant of the day: In a popular LDAP directory management tool, not to be named, there is a message indicating that the performance of the LDAP server is poor. While this might still be true: honestly, building LDAP filters the way this tool does and then complaining about the LDAP server is like, let's say, searching for papers in the whole city, while you know for certain they are located within a single drawer, in a single closet, in a single room of your apartment, and then blaming the city council because your search took so damn long. What a mockery.
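To make the analogy concrete (directory layout and names here are made up, and have nothing to do with the tool in question), the difference is roughly the one between a search restricted to the place where the entry is known to live and a subtree search over the whole directory:

# search only the one "drawer" you know the entry is in
ldapsearch -x -b "ou=people,dc=example,dc=org" -s one "(uid=jdoe)" mail

# search the whole "city" and sift through everything afterwards
ldapsearch -x -b "dc=example,dc=org" -s sub "(objectClass=*)" | grep -i jdoe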

17 September 2011

Patrick Schoenfeld: Struggling with Advanced Format during an LVM to RAID migration

Recently I decided to invest in another harddisk for my Atom system. That system, which I built almost two years ago, has become the central system in my home network, serving as a fileserver to host my personal data, some git repositories etc., as a streaming server, and, since I switched to a cable internet connection, also as a router/firewall.
Originally I bought that disk to back up some data from the systems in the network, but then I realized that all data on this system was hosted on a single 320GB 2.5" disk, and it became clear to me that, in the absence of a proper backup strategy, I should at least provide some redundancy.


So I decided, once the disk was in place, that the whole system should move to a RAID1 over the two disks. Basically this is not as hard as it may seem at first glance, but I had some problems due to a new sector size used by some recent harddisks, which is called Advanced Format.


But let's begin at the start. The basic idea of such a migration is:
  1. Install mdadm with apt-get. Make sure to answer 'all' to the question about which devices need to be activated in order to boot the system.
  2. Partition the new disk (almost) identically.
    Because the new drive is somewhat bigger an exact copy wouldn't make sense, but at least the two partitions which are to be mirrored onto the second disk need to be identical.
    Usually this is achieved easily by using
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    In this case, it wasn't that easy. But I will come to that in a minute.
  3. Change the type of the partitions to 'FD' (Linux RAID autodetect) with fdisk
  4. Erase any traces of a possible old RAID from the partitions, which is probably pointless on a brand-new disk, but we want to be sure:
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
  5. Create two DEGRADED raid1 arrays from the partitions:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
  6. Create a filesystem on the first RAID device, which will become /boot.
  7. Mount that filesystem somewhere temporary and move the contents of /boot to it:
    mount /dev/md0 /mnt/somewhere
  8. Unmount /boot, edit fstab to mount /boot from /dev/md0 and re-mount /boot (from md0)
  9. Create mdadm configuration with mdadm and append it to /etc/mdadm/mdadm.conf:
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
  10. Update the initramfs and grub (no manual modification needed with grub2 on my system)
    and install grub into the MBR of the second disk.

    update-initramfs -u
    update-grub
    grub-install /dev/sdb
  11. The first moment to pray: Reboot the system to verify it can boot from the new /boot.
  12. Create a physical volume on /dev/md1:
    pvcreate /dev/md1
  13. Extend the volume group to contain that device:
    vgextend <volgroup_name> /dev/md1
  14. Move all data of the volume group physically from the first disk to the degraded RAID (the LVM command for this is pvmove):
    pvmove /dev/sda2 /dev/md1

    (Wait for it to complete... takes some time ;)
  15. Remove the first disk's partition from the VG:
    vgreduce <volgroup_name> /dev/sda2
  16. Prepare it for addition to the RAID (see steps 3 and 4) and add it:
    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2
  17. Hooray! Watch /proc/mdstat. You should see that the RAID is recovering (see the example output after this list).
  18. When recovery is finished, pray another time and hope that the system still boots, now running entirely from the RAID. If it does: finished :-)
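For reference, this is roughly what /proc/mdstat looks like while md1 rebuilds (block counts and speeds are illustrative, not my actual numbers):

cat /proc/mdstat

Personalities : [raid1]
md1 : active raid1 sda2[2] sdb2[0]
      312424000 blocks [2/1] [U_]
      [===>.................]  recovery = 17.4% (54400000/312424000) finish=73.0min speed=58000K/sec
md0 : active raid1 sda1[1] sdb1[0]
      144448 blocks [2/2] [UU]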

Now to the problem with Advanced Format:
Hardware vendors are in the process of moving to a new sector size. Physically, my new device has 4096 bytes per sector, somewhat different from the 512 bytes disks have had for the last decade.


Logically it still has 512 bytes per sector. As far as I understand, this is achieved by placing 8 logical sectors into one physical sector, so when partitioning such a disk the alignment has to be chosen so that partitions start at a logical sector number which is a multiple of 8.

That, obviously, wasn't the case with the old partitioning on my first disk. So I had to create the partitions by specifying their start sectors manually and making sure they are divisible by 8.

Otherwise fdisk would complain about the layout on the disk.
This does not work with cfdisk, because it does not accept manual alignment parameters, and unfortunately the partitions it creates have the wrong alignment. So good old fdisk, plus some calculations of how many sectors are needed and where to start, to the rescue.

So the layout is now:
Device      Boot   Start     End          Blocks       Id  System
/dev/sdb1          2048      291154       144553+      fd  Linux raid autodetect
/dev/sdb2          291160    625139334    312424087+   fd  Linux raid autodetect
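Whether a disk actually uses 4096-byte physical sectors, and whether the chosen start sectors are aligned, can be double-checked like this (sdb is of course just my device name):

cat /sys/block/sdb/queue/physical_block_size   # 4096 on an Advanced Format disk
cat /sys/block/sdb/queue/logical_block_size    # 512
fdisk -lu /dev/sdb                             # lists partitions in sectors; every start sector should be divisible by 8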


28 April 2011

Patrick Schoenfeld: On Debian discussions

In my article "Last time I've used network-manager" I made a claim for which I've been criticized by some people, including Stefano, our current (and just re-elected) DPL. I said that a certain pattern, which showed up in a certain thread, was prototypical for discussions in the Debian community.

Actually I have to admit that this was a very generalizing statement, turning my own point about the discussion right back at myself.
Because, as Stefano correctly said, there has been some progress in the Debian discussion culture.
Indeed, there are examples of threads where discussions followed another scheme.
But in my own defense I have to say that such changes are like little plants (in the botanical sense). They take their time to grow, and as long as they are so very new, they are very vulnerable to even small interruptions, regardless of how tiny those interruptions may seem.

I've been following Debian discussions for 6 or 7 years. The scheme I was describing is the one with the most visibility of all Debian discussions. Almost every discussion which was important for a broader audience followed that scheme. There is a reason Debian is famous for flamewars.
In a way it's quite similar to the perception some people have of network-manager. Negative impressions manifest themselves, especially if they have years of time to do so.
Positive impressions do not have a chance to manifest themselves as long as the progress is not visible enough to survive small interruptions.

I hope that I didn't cause too much damage with my comment, which got cited (out of context) on other sites. Hopefully the Debian discussion culture will improve further, to a point where there is no difference between the examples of very good, constructive discussions we already have in some parts of the project and the project-wide decision-making discussions which affect a broad audience and often lead to flamewars.

Patrick Schoenfeld: Directory-dependent shell configuration with zsh (Update)

For a while I've been struggling with a little itch. I'm using my company notebook for company work and for Debian related stuff. Now, whenever I switched between those two contexts, I had to manually fix the environment configuration. This is mostly about environment variables, because tools like dch et cetera rely on some of them, like DEBEMAIL, which need to be different in the different contexts.
A while ago I had the idea to use directory-dependent configuration for that purpose, but I never found the time and mood to actually scratch that itch.
Somewhere along the way I applied a quick hack ("case $PWD in ...) export ...;; esac") to my zsh configuration to ease the pain, but it still did not feel right.


For the impatient: below you'll find a way to just use what's described here. The rest of the article contains detailed information on how to implement something like this.


The other day I was cleaning up and extending my zsh configuration and it came to my mind again. I then thought about what my requirements are and how I could solve it. First I thought about using a ready-made solution, like the one in the Grml zsh configuration, but at that point I did not remember it (it took a hint from a co-worker *after* I had finished the first version of my solution). Then I came up with my requirements:

This led to a configuration approach as a first start. When thinking about how to represent the configuration I looked into the data types supported by zsh. zsh supports associative arrays, which is perfect for my needs. I came up with something like this:
typeset -A EMAILS ENV_DEBIAN ENV_COMPANY
EMAILS=(
"private" "foo@bar.org"
"company" "baz@foo.org"
"debian" "schoenfeld@debian.org"
)
ENV_DEBIAN=(
"DEBEMAIL" "$EMAILS[debian]"
"EMAIL" "$EMAILS[debian]"
)
ENV_COMPANY=(
"DEBEMAIL" "$EMAILS[company]"
)
The next part was selecting the right profile. In the first version I used the old case logic, but it broke my separation of logic and configuration. At approximately this point the co-worker led me to the grml approach, from which I borrowed an idea:


# Configure profile mappings
zstyle ':chpwd:profiles:*company*' profile company
zstyle ':chpwd:profiles:*debian*' profile debian

and the following code to look up the profile based on $PWD:

1 function detect_env_profile {
2     local profile
3     zstyle -s ":chpwd:profiles:${PWD}" profile profile || profile='default'
4     profile=${(U)profile}
5     if [ "$profile" != "$ENV_PROFILE" ]; then
6         print "Switching to profile: $profile"
7     fi
8     ENV_PROFILE="$profile"
9 }

For an explanation: zstyle is a zsh builtin which is used to "define and lookup styles", as the manpage says, or put differently: another way to store and look up configuration values.
It's nice for my purpose, because it allows storing patterns instead of plain configuration values, which can easily be compared against $PWD with all of the zsh globbing magic. This is basically what's done in line 3. zstyle sets $profile to the matching zstyle configuration in the :chpwd:profiles: context, or to 'default' if no matching zstyle is found.

The (almost) last part is putting it together with code to switch the profile:

1 function switch_environment_profiles {
2     detect_env_profile
3     config_key="ENV_$ENV_PROFILE"
4     for key value in ${(kvP)config_key}; do
5         export $key=$value
6     done
7 }
The only non-obvious part in this are lines 3 and 4. Remember, the profiles were defined as ENV_<PROFILE>, where <PROFILE> is the name of the profile. We cannot know that key in advance, therefore we have to construct the right variable name from the result of detect_env_profile. We do that in line 3 and look the variable up in line 4.
The decisive aspect for that is the P-flag in the parameter expansion. It tells zsh that we do not want the value of $config_key, but instead the value of $WHATEVER_CONFIG_KEY_EXPANDS_TO.
The other flags, k and v, tell zsh that we want both keys and values from the array. If we omitted those flags it would give us the values only.
We then loop over that to configure the environment. Easy, huh?
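For illustration, here is roughly what the different flags return for the example configuration above (shown as comments; the order of keys in an associative array is not guaranteed):

config_key="ENV_DEBIAN"
print ${(P)config_key}    # values only:     schoenfeld@debian.org schoenfeld@debian.org
print ${(kP)config_key}   # keys only:       DEBEMAIL EMAIL
print ${(kvP)config_key}  # keys and values: DEBEMAIL schoenfeld@debian.org EMAIL schoenfeld@debian.org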

We would be finished, if this actually did anything. The code above needs to be called. Luckily for us that's pretty easy to achieve, as zsh has a hook for when the current directory is changed. Making all this work is simply a matter of adding something like this:

function chpwd() {
    switch_environment_profiles
}
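With all of that loaded, a session looks something like this (the directory layout is just an example; any path matching the *debian* pattern from the zstyle configuration above triggers the switch):

cd ~/src/debian/pkg-foo
# -> Switching to profile: DEBIAN
echo $DEBEMAIL
# -> schoenfeld@debian.org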

Now, one could say that the solution in the grml configuration has an advantage: it allows calling arbitrary commands when the profile changes, which might be useful to *unset* variables in a given profile, or whatever else you can think of.
Well, it's a matter of three lines to extend the above code with that feature. Just add


# Taken from grml zshrc, allow chpwd_profile_functions()
if (( ${+functions[chpwd_profile_$ENV_PROFILE]} )) ; then
    chpwd_profile_${ENV_PROFILE}
fi

to the end of switch_environment_profiles, and now it's possible to additionally define a function chpwd_profile_<PROFILE> which is called whenever the profile is changed to that profile.


USAGE: I have put the functions into a file which can be included in your zsh configuration; it can be found on github.
Please see the README and the comments in the file itself for further usage instructions.

19 April 2011

Patrick Schoenfeld: password-gorilla 1.5.3.4 ACCEPTED into unstable

The password-gorilla package has lacked some love for a while, and at some point I orphaned it.
That happened due to the fact that the upstream author was pretty unresponsive and inactive, and my own Tcl skills are very limited. As a result the password-gorilla package was in a bad state, at least from a user's point of view, with several (apparently) randomly appearing error messages and the like, stalled feature development etc.

But in the meantime a promising event arose. A guy named Zbigniew Diaczyszyn wrote me a mail saying that he intended to continue upstream development. Well, 'in the meantime' is kind of an understatement: that first mail already arrived in December 2009. And he asked me if I'd like to continue maintaining password-gorilla in Debian. I agreed, but as promising as it sounded to have a new upstream, I was not sure it would work out. However: my doubts were not justified.

In the time between 2009 and now Zbigniew managed to become the official upstream (with the blessing of the previous upstream), create a github project for it and make several releases.


I know there are several people out there who have tested password-gorilla. I know there were magazine reviews featuring the old version, which was a bit buggy with recent Tcl/Tk versions. That made a quite good multi-platform password manager, with support for a very common password file format, look bad.
I recommend that previous users of password-gorilla try the new version, which has recently been
uploaded to unstable.

15 April 2011

Patrick Schoenfeld: Last time I've used network-manager..

There's an ongoing thread on the Debian mailing lists about installing network-manager by default on new Debian installations. I won't say much about the thread itself. It's just a prototypical example of Debian project discussions: discuss everything to death, and when it's dead, discuss a little more. And - very important - always restate the same arguments as often as you can. Or, if it's not your own argument you restate, restate the arguments of others. Ending with the same argument stated 100 times. Even if it has already been disproved.

I don't have a strong opinion about the topic in itself. However, there is something I find kind of funny: a statement brought up by the people who strongly oppose network-manager as a default.
A statement I've heard so often that I can't count the occurrences anymore.

"The last time I tried network-manager it sucked."
It often comes in different disguises, like:

But it basically boils down to the essence of the sentence I've written above. Sometimes I ask people who express this opinion a simple question:

When did you test network-manager the last time?
The answers are different but again the basic essence of the answers is mostly the same (even if people would never say it that way):

A long time ago. Must have been around Etch.
And guess what: There was a time when I had a similar opinion. Must have been around Etch.
During the life cycle of network-manager between Etch and now a lot has happened. I started using network-manager again at some point during the Lenny development cycle.
It has been my daily driver for managing the network connections on my notebook ever since. Yes, together with ifupdown, because, yes, network-manager does not support every possible network setup with all possible special cases. But it supports auto-configuration of wired and wireless devices, connecting to new encrypted networks, be it a WLAN or an 802.1x LAN, using UMTS devices, using tethering with a smartphone. And all of that with a few mouse clicks.

Yes, it had some rough edges in that life cycle. Yes, it had that nasty upgrade bug, which was very annoying.
But face it: It developed a lot. Here are some numbers:

Diffstat between the etch version and the lenny version:
362 files changed, 36589 insertions(+), 36684 deletions(-)

Diffstat between the Lenny version and the current version in sid:
763 files changed, 112713 insertions(+), 56361 deletions(-)

The upgrade bug has been solved recently. Late, but better late than never.

So what does that mean? It means that if your last network-manager experience was with Lenny, or even worse around Etch, you'd better give it another try if you are interested in knowing what you are talking about. For now it seems that a lot of people do not. Not even remotely.

25 February 2011

Patrick Schoenfeld: Let me introduce DPKG::Log and dpkg-report

We have customers who require a report about what we've done during maintenance windows. Usually this includes a report about upgrades, newly installed packages etc., and obviously everything we've done apart from that.
Until now we've prepared these manually. For a larger bunch of systems this is a big PITA, because to be somewhat useful you have to collect the data for all systems and, after that, prepare a report where you have:

It's also error-prone, because humans make mistakes.

Perl to the rescue!
At least the part about generating a report on installed/upgraded/removed packages could be automated, because dpkg writes a well-formed logfile to /var/log/dpkg.log. But I noticed that there apparently is no library specialised in parsing that file. It's not a big deal, because the format of that file is really simple, but a proper library would be nice anyway.
And so I wrote such a library.
It basically takes a logfile, reads it line by line and stores each line, parsed into its parameters, in a generic object.
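For reference, the lines in /var/log/dpkg.log look roughly like this (package names, versions and timestamps are made up):

2011-02-24 22:01:11 startup archives unpack
2011-02-24 22:01:12 install foo <none> 1.0-1
2011-02-24 22:01:13 status half-installed foo 1.0-1
2011-02-24 22:01:15 status installed foo 1.0-1
2011-02-24 22:01:20 upgrade bar 0.99-12 1.00-1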

Its features include:

Based on that, I wrote another library DPKG::Log::Analyse, which takes a log, parses it with DPKG::Log and then extracts the more relevant information such as installed packages, upgraded packages, removed packages etc.

This, in turn, features:
- Info about newly installed packages
- Info about removed packages
- Info about upgraded packages
- Info about packages which got installed and removed again
- Info about packages which stayed half-installed or half-configured at the end of the logfile (or of the defined reporting period)


These libraries are already uploaded to CPAN and packaged for Debian.
They passed the NEW queue very quickly and are therefore available for Sid:

http://packages.debian.org/sid/libdpkg-log-perl

As an example use (and for my own use case, as stated above), I wrote dpkg-report, which uses the module and a Template::Toolkit-based template to generate a report about what happened in a given logfile.
It currently lacks some documentation, but it works roughly like this:

Report for a single host over the full log:
dpkg-report

Report for a single host for the last two days:
dpkg-report --last-two-days

Report for multiple hosts (and logfiles):
The script expects each log file to be named <systemname>.dpkg.log, so that it can guess the hostname from the file name, and it can grab all such log files from a directory if a directory is specified as the log-file argument:

dpkg-report --log-file /path/to/logs

This will generate a report about all systems without any merging.

Report for multiple hosts with merging:
dpkg-report --log-file /path/to/logs --merge

This will do the following:
A (fictitious) report could look somewhat like this:

dpkg-Report for all:
------------------------
Newly Installed:
abc
Removed:
foo
Upgraded:
bar (0.99-12 -> 1.00-1)

dpkg-Report for test*:
------------------------
Newly Installed:
blub

dpkg-Report for test1:
------------------------
Upgraded:
baz (1.0.0-1 -> 1.0.0-2)

dpkg-Report for test2:
------------------------
Upgraded:
zab (0.9.7 -> 0.9.8)

Currently this report generator is only included in the source package and in the git(hub) repository of the library. I wonder if it makes sense to let the source package build another binary package for it.
But it's only a 238-line Perl script with a dependency on the Perl library, so I'm unsure whether it warrants a new binary package. What do others think?

7 October 2010

Patrick Schoenfeld: FAI and custom packages

In the last two cases where I created FAI configurations for different systems I needed to install packages which are not part of the Debian repositories.
Unfortunately FAI does not provide a standard way to supply the installation with custom packages, so I needed to figure out a way on my own.

I have the following requirements for the solution:
It would be easy if a repository reachable by the system being installed already existed. Unfortunately it does not. But... aha! So here is the idea:

  1. In the FAI configuration space add a new directory
    'packages'
  2. We will use reprepro for the repository. You should install it wherever you prepare the configuration space. It would also be an option to install it into the NFSROOT and run it from there, but I decided to prepare the repository beforehand (though that would be a clean solution if one of the FAI developers decided to implement a standard way, I guess).
  3. Create a directory packages/conf and therein a file distributions with this content:
    Origin: Debian
    Label: Debian-All
    Suite: stable
    Codename: lenny
    Version: 5.0
    Architectures: i386 amd64
    Components: main
    Description: Debian Lenny
    Obviously you should adapt at least the Architectures line to whatever you need; adapt the rest as needed.
  4. Now you can add packages with reprepro includedeb lenny /path/to/debfile from within that directory, so cd into it first.
    The result is that the packages directory becomes a full-fledged Debian APT repository which can be used by apt with a line similar to the following:
    deb file:/var/lib/fai/config/packages lenny main contrib non-free
  5. And that's what we'll do. In order to make this happen before any software gets installed, we create a hook. It needs to be called instsoft.CLASS, where CLASS is a class used for the FAI install we are working on. It basically needs to contain something like this (plus the usual shell script stuff; see the fuller sketch after this list):

    $ROOTCMD ainsl /etc/apt/sources.list "deb file:/var/lib/fai/config/packages lenny main contrib non-free"
    $ROOTCMD apt-get update
    This should add the repository to the sources.list in /target and run apt-get update.
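Filled up with "the usual shell script stuff", such a hook could look roughly like this (the class name MYCLASS is just a placeholder):

#! /bin/bash
# hooks/instsoft.MYCLASS in the FAI configuration space
set -e

# make the custom repository known inside the installation target
$ROOTCMD ainsl /etc/apt/sources.list "deb file:/var/lib/fai/config/packages lenny main contrib non-free"

# refresh the package lists so the custom packages become installable
$ROOTCMD apt-get update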
That's it. This should be all that's needed to make the repo available to the usual installation. That means that if your package_config files contain packages which are only part of the custom repo, they can now be installed.

(This is probably not a copy-and-paste howto. It just describes the rough steps needed and might contain errors. So don't blame me if it doesn't work 100% this way. ;-)

27 September 2010

Patrick Schoenfeld: FAI, my notebook and me

I usually take my (company) notebook with me on business travels.
Twice now I had the unlucky situation that something bad happened to it on such an occasion. Whenever you get into the situation of having to reinstall your system in a hotel room, you might have the same wish I had: a way to quickly bring the system back into a state where I can work with it.

Well, I used FAI a while back for a customer. It's a really great tool for automated installations and I really prefer it over debian-installer preseeding. Apart from the fact that the partitioning is way easier, it also gives me the power to complete the whole installation up to a point where there is almost nothing left for me to do. It also supports installing completely from CD or USB stick, which makes it suitable for me.

However, my notebook installation has a little "caveat" which made that a bit harder than previously thought. As it is a notebook and I carry company data on it, it has to be encrypted. Full disk encryption.
The stable FAI version does not support this.
The problem is: the current support for crypto in setup-storage (FAI's disk setup tool) does not go very far. What is supported is creating a LUKS container with a keyfile, saving this keyfile to the FAI $LOGDIR and creating a crypttab.
Unfortunately, for a root filesystem this would leave us with an unbootable system, because it requires manual interaction. And on the other hand, using a keyfile for an encrypted root is a no-go anyway. We want a passphrase.
On a side note: encrypted-root support with a keyfile is more complex than with a passphrase, as you have to provide a script that knows how to get at the key.

So I started experimenting with scripts in the FAI configuration that added a passphrase and changed and recreated the crypttab. That worked, although it was very ugly.
But thanks to a good cooperation on this with Michael Tautschnig, a FAI and Debian developer, the FAI experimental version 4.0~beta2+experimental18
now supports LUKS volumes with a passphrase that can be specified in the disk_config.

Now it's actually possible to set up a system like mine with FAI out of the box. One thing (apart from the FAI configuration and setup as you want and need it) has to be done anyway:
the initrd support of cryptsetup requires busybox (otherwise you will see a lot of "command not found" errors and your system won't boot) and it requires initramfs-tools, which is standard nowadays.
So you have to make sure that these packages are in your package config!
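In FAI terms, that boils down to a package_config entry roughly like this (the class name NOTEBOOK is just an example):

# package_config/NOTEBOOK
PACKAGES aptitude
cryptsetup
busybox
initramfs-tools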

So now I can define a FAI profile for my notebook, create a partial FAI mirror with the packages it needs and put all of this together on a USB stick with fai-cd (don't worry about the name, it can be used to create ISO images as well). I can carry this with me, and if I need it I stick it into my notebook and let FAI automatically reinstall my system. Nice :)

Update: Somebody asked me, weither he understood me right, that I'd put my LUKS passphrase on a FAI usbstick clear-text. Obviously, the answer is and should be NO. What I do and what I'd suggest to others: Use a default passphrase in the FAI configuration, install with it - after all on a fresh installation there is not much to protect - and once it is finished *change* the passphrase to something secure by adding a new keyslot and removing the old.
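The passphrase change afterwards is a matter of two cryptsetup calls (the device name is just an example; you will be prompted for the old default passphrase and the new one):

# add a new, secure passphrase to a free keyslot
cryptsetup luksAddKey /dev/sda2

# then remove the default passphrase that was used during the FAI installation
cryptsetup luksRemoveKey /dev/sda2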

23 August 2010

Patrick Schoenfeld: Sorry for Planet Spamming

Bah, sorry, I've just noticed that Planet was spammed with an old post of mine, because I changed the font to the default (which apparently also didn't work, lol).
Sorry for that.

Patrick Schoenfeld: Why do I use facebook and co?

Recently I blogged a serious complaint about Facebook's marketing practices.
I must note that, at the time of writing this, it is still a standing fact.
And it somehow leads me to the question of why I use Facebook (and similar portals like Xing or MeinVZ) at all.

I would say that I'm a person who values security. Probably I need to be, because my job involves security aspects, even if it's only to protect data.
Additionally I value privacy. I would never use a toilet in a glass case where everybody could watch me doing my business.
The idea of keeping the door to my flat open all the time would set off all alarm bells inside of me.
But on the other hand I have a blog on which I write about technical topics, and personal topics, like the death of my grandma or how my neighbours disturb me. And I use Facebook. And MeinVZ.

I'm well aware of how this makes it easier to trace me. How this could be used against me if I happen to be searching for a job. Aware of the everlasting memory of the internet, which will keep this... possibly forever.
And I have friends who have a serious antipathy against this and would probably never use such a portal at all.
So why do I use it?

That I started using those portals was some kind of peer pressure. I'm a person who likes being part of things and I like being present. Many of my friends were already using those portals, and so I followed an invitation to use them, too. But I did think about several points:

I came to the conclusion that there are pros and cons to using these services. And I found that I can live with the cons due to the convenience they bring me. And that I could always stop using them.

However: that does not mean that I have to "eat or die" whatever they serve.

Oh, and on a side note: I found "Facebook, Brüste und die Currywurst" to be an interesting article about the topic. It's recommended reading, although only for people who understand German.

Patrick Schoenfeld: Gnomad = Gnome + Xmonad

Since I started using Linux I've used several window managers. I got used to blackbox and fluxbox and at times used enlightenment and some others in the past, but it's been a while since I became the user of a desktop environment. I'm using GNOME because it provides what I need and I don't have to spend an hour configuring it before it suits my needs. After all, I'm lazy.
Although I'm quite satisfied with GNOME, there is one feature I was always missing.
Because I spend much time tiling and arranging windows on my desktops, I noticed that I could use
a tiling feature, something which was already present back in Windows 3.11.
GNOME/metacity does not have this feature, and given that a wishlist bug about it has been open for almost
7 years,
it's unlikely that this will ever change. There are separate tools, which I recently learned about, that can assist me with this, for example the Perl script 'wumwum'. But this seems to be the wrong solution to a real problem. Additionally, wumwum does not work properly with metacity, so I'd need to switch to another WM anyway, which led to the point where I started thinking about integrating a true tiling WM into GNOME... once again.

First, I looked into awesome, which is a window manager I used some time ago.
But the documentation about configuring it is basically API documentation, with no obvious entry point.
It seems you pretty much have to study the whole API just to set some simple settings (e.g. a padding for the GNOME panel and some always-floating applications). I even thought about learning Lua, because it seems like a language which is quick and easy to learn, but honestly, if I need to study a programming language and a whole API documentation just to configure a window manager, then IMHO there is something conceptually wrong with that piece of software.
Eventually I came to Xmonad. This window manager is configured in Haskell and I fear I'd need to learn that language as well if I wanted to configure weird things. But the scenario I wanted is well documented, and documentation for the more common configuration settings exists all over the place, so I don't
really feel inclined to learn more than needed.
Remember? I'm lazy.

Now I'm feeling quite happy with this combination. It didn't cost me much time to get used to the most basic keyboard shortcuts or to set the whole thing up. GNOME and Xmonad work together like a dream team. I feel more productive now. As an additional plus I reinstalled the vimperator Firefox plugin, because with my new desktop environment I use the keyboard more often for ordinary tasks like switching between apps or desktops, and I felt that being able to quickly operate Firefox with the keyboard, too, would be a plus. Well, it is.

19 August 2010

Patrick Schoenfeld: Getting the mp3 mess into control

I have some mp3 files in my collection. Some years ago, back in the times when I was a Windows user, I used "The Godfather" to keep the chaos under control. This software, although not open source, is really good at what it does. And it does almost everything to organize mp3s, from (mass-)tagging to auto-renaming to auto-sorting your mp3s. But this wouldn't be a blog entry by me if it turned into... praise for a Windows software, so...

When I became a daily Linux user I searched for alternatives, with or without a GUI, but I did not really find an application that suited my needs. As far as tagging and renaming were concerned it wasn't that hard. There are really excellent command line tools (lltag, id3, etc.) and rumour has it that there are also GUI tools.
But what I still did not find is a simple, yet flexible, tool to sort a huge collection of mp3s into a flexible structure on the filesystem, like "The Godfather" has been able to do for... ehh... a lot of years.

So the day before yesterday I finally decided to write one on my own and came up with a ~280-line Perl script (POD documentation included) which does exactly what I want and is simple.
It does no tagging.
It does no renaming.
All it does is sort mp3s into a given template-based directory hierarchy based on their ID3 tags.

At this point a warning is due. I'm sharing that script with you under the terms of the GPL. BUT it still needs testing and therefore you probably do not want to use it without its safety measures (e.g. dry-run or copy instead of move) to avoid data loss. And notice that although it has been written with portability in mind, it has only been tested on a Debian GNU/Linux system.

The script is hosted at github. Here (or raw to download it directly).

After I talked to a co-worker about the script, he told me that arename would do what I want. So you probably don't want to use my script, because arename is probably better tested and much more sophisticated. But on the other hand, I had a quick look at the arename manpage and it's so utterly feature-loaded that it cannot avoid a certain complexity. My tool is simple.
And there is another advantage. If you want arename to handle a certain set of mp3s, you have to use your shell magic to find the files and pipe them to arename. My script finds all mp3s recursively from where you let it start (the default is $PWD, so be careful) and will happily move them into a hierarchy under a directory you ask it to. For the simple job of sorting mp3s it's probably easier to use.

(Oh, and even if it's worth nothing otherwise, it might still be of use as a simple programming example of how one could solve this problem in Perl.)

16 July 2010

Patrick Schoenfeld: Facebook aggressively advertising dubious features

Yesterday some confusion arose on my side when I saw a new Facebook advertising campaign in my Facebook account (yes, I am a member of Facebook, although I'm aware of the privacy concerns). Basically it was saying that I should try the friend finder and that some of my friends (showing and naming three of them) had already used it.

Some background:
The friend finder feature of Facebook asks for the password of your e-mail account. It will then crawl through your emails to find contacts that might already be on Facebook but not connected to you (in the Facebook sense).

My first feeling was: oh my god, how can it be that friends (and family) of mine are so naive? Especially since people were included whom I consider to be quite clever. But honestly: who would be so naive as to give an unknown company direct, unsupervised (you can't tell what they really do) access to their mail account? Would you give it to your friend? Your husband? Your father? I guess the answer will be "No" in most cases, and these are most likely people you trust. Well, I know, you could say similar things about Googlemail, who crawl your mails to show you personalized advertising. And in fact you are right. But if you decide to use Gmail for your email hosting you have to trust them anyway, like you have to trust anybody else you retain to host your mails (who could do the same but just not tell you). But in this case it's a third party, Facebook.

But if you now think that this is going to be a rant against my friends: you are wrong. Over the course of the day I found out that it basically shows all my friends in rotation. Every time I open the start page it randomly picks three people from my friend list and shows them to me, telling me the same lie all over again.

So what do we have here? Facebook tries to advertise the most dubious feature they have in the most aggressive way one could imagine: by asserting false facts. Wouldn't that even be an offence in Germany ("Irreführung", § 5 UWG, or maybe also § 4 UWG, "unsachliche Beeinflussung")?

Facebook, you can do better.

Update: Stefano raised a good point. I didn't actually make clear that it has been verified that those people actually did not use the "feature":

1. I asked some of them. They said they didn't use it and were in fact surprised that I asked.
2. Others told me they saw the ad stating that I had used the feature. I definitely did not use this "feature".
3. Probably weak: IMHO it's highly unlikely that all of my friends used that feature, and at the point I wrote this it had already shown my whole friend list.

7 April 2010

Patrick Schoenfeld: Ubuntu considering a critical bug an "invalid" bug?

I just discovered this bug report over at Ubuntu.
Short summary:
They have a script in upstart which is not meant to be run manually, and if you do run it, it will erase your whole file system. Additionally, it seems that the fact that you must not run that script is not communicated anywhere.

That alone isn't the most spectacular part of it. Bugs happen. What's spectacular is how a Canonical employee and member of the Tech Board (for people who don't know it: the people who decide about the technical direction Ubuntu takes) handles that bug. One quote of his to sum it all up:
Sorry, the only response here is "Don't Do That Then"

So what we have here is a classic case of bad programming. The problem in question is that the script expects a certain environment variable to be set. Fair enough. However, it does not check whether it is set at all, and instead of failing or using a sensible default it simply sticks to undefined behaviour. What we have here is a classic programming mistake every beginner tends to make. People who start programming often forget (or don't know) that every external value we rely on must be considered untrustworthy. Therefore a good practice is to check those values. In this case someone decided that this is useless, because they suffer from the wrong assumption that nobody ever calls the script manually, and the other wrong assumption that the caller of the script will always set the environment variable correctly. This is a double fail.
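For comparison, guarding against an unset variable is a one-liner in shell; a minimal sketch (the variable name and command are made up, not taken from the actual Ubuntu script):

#!/bin/sh
# abort with a clear error if the caller did not set the variable ...
: "${TARGET_DIR:?TARGET_DIR is not set, refusing to continue}"

# ... so that a destructive command further down can never operate on / by accident
rm -rf "$TARGET_DIR"/*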
Now the developer in question does not accept that (someone else pointed out why the behaviour of the script is dangerous); he simply says that the bug is invalid. That's really a pity.

27 January 2010

Patrick Schoenfeld: Debian translations - EPIC LOL

As a German I'm used to strange translations in a computing context.
I saw it back when I was using Microsoft products and I regularly stumble upon it on Debian systems. But what's actually kind of funny: Debian is outstanding in that regard.
An example:
Debian has that "wonderful" package manpages-de which provides a German manpage for ps.
It contains a formulation that is so inaccurate it's not even funny:

"--cumulative Daten von toten Kindern einbeziehen (als Summe zusammen mit den Eltern)"


For English-only speakers: that can be roughly translated as "Include data from dead children
(summed together with their parents)". To me this somehow sounds like we are a butcher OS, so I reported #495441, but nobody has cared about it yet.

Today I had another WTF moment with German translations, because when I wanted to upgrade my system I read the following sentence:

"Sind Sie sich sicher, dass Sie die oben genannten Pakete installieren bzw. aufr sten wollen?"


Cold war, anyone?

Well, it's a classic case of using a word-by-word translation instead of a meaning-oriented translation. It's a fact that 'upgrade' can be translated as 'aufrüsten' (as done above), but in this case it's just not appropriate. As a matter of fact, most German people I know would actually associate it with the military sense of the word (hence the "Cold war, anyone?" above). And just btw., if we relied on computing power to translate it back into English, we would get:

Are you sure itself that you want to install and/or rig the packages specified above?


So it seems that Babelfish has a similar perception.

23 November 2009

Patrick Schoenfeld: Curiosity of code

Just some random curiosities I've recently seen in PHP code.
if (TRUE) {
    ...
}


or maybe:
if (...) {
    ...
}
else {
    // see Code below
}

An evergreen:

function _copy (...) {
    ...
}

/* Workaround function */
function copy (...) {
    ...
}

22 November 2009

Patrick Schoenfeld: Rest in Peace, grandma.

A few weeks ago I visited my grandma, because she and my grandpa had their birthdays.
We also did it because we figured that it might be the last chance to see her alive.
It was a hard time for us, because she wasn't the person I knew, the person with whom I argued so often when I was a young boy, the person who showed me the beauty of cooking. She was lying in her bed, barely aware of us, not really able to hold a conversation. As soon as we went away from her bed she was calling for one of us, repeating the name of the person she was calling every few seconds. Most of the time she called for my grandpa, but at one point it was me she was calling. I then stood there, next to her bed, not really able to talk, because she wasn't, and I didn't know what to say.
And then I had a talk with my grandpa. Although he looked visibly aged, he still seemed to be in good shape. But while talking to him I realized that he was tired. He told me that his wife was asking him eight times a day which day it was, and such things. He told me and my cousin that he fears the day when she is gone. It was hard. I mean... I knew it, I knew that it would break his heart when she's gone, but in my memory he has never been a guy who would have said it out loud. I know that the two had good and bad times together. There was always that joke about them, that they couldn't live with each other but not without each other either. But now, the very thought that this might be true frightened me.
Yesterday I got a call from my mother, telling me that it seemed grandma had decided that she would go, so my mother drove to my grandparents. A few hours later she called me again and told me that it was over. She passed away quietly in her sleep. No pulse anymore. But the first thing I replied was: can you take care of grandpa? It's not that I don't grieve, but I knew it was better for her. She had a life with ups and downs, she had a man who loved her more than anything else. But she wasn't able to live or enjoy her last days, I think. And now I'm worried about the living, the one whose heart supposedly just broke. I hope he will be able to enjoy his remaining years anyway. Because he is a good man, he always has been, and I think he deserves it.

Grandma, we will miss you. Rest in Peace.

27 October 2009

Patrick Schoenfeld: Things that make you a good programmer

If you ever wondered if you are a good programmer (not), you might think about the following points:

1. Repeat yourself. How else would you keep yourself busy, if your customer has new requirements?

2. Re-using code is for people who cross to the other side of the street when a big dog walks along. No risk, no fun. How else would you find out whether the common idiom you use is really the way the job has to be done?

3. If you have a coding convention (e.g. how code has to be indented): just ignore it. It's a good thing to have editors go crazy when trying to automatically detect the indentation of a source file. By always confusing the editor you keep up the fun for the people who try to change your code. It would be too boring for them to simply edit the file, without the quiz of figuring out which style applies to your code. Extra points for those who additionally (on top of mixing tabs, spaces, 4 and 8 spaces, expandtab and no expandtab) write a vim modeline into their file that - guaranteed - does not match the indentation of the file.

4. If you can complicate things: do it. Numeric indices in arrays can be exchanged for arbitrary strings; this makes your code more interesting, especially if you have to move elements within that array. And complicated code makes people who don't know it think that you are a good programmer. One step nearer to your goal, isn't it?

5. When working with different classes, invent a system that auto-loads the classes you need. Don't document it, that would be a risk to your job. It's not necessary anyway, because good programmers like riddles, and finding out what gets called, where and why, is a simple but entertaining riddle. Documenting in general is a bad idea as well, especially if you make your system open source, because with good documentation it might be too easy for competitors to use your code to make money.

6. If you find ways to do extra function calls: do it. It gives you the chance to refactor your code when the customer notices that it is too slow. Great opportunity, huh?

(But seriously: don't listen to me. It's just a cynical way to express my feelings.)

26 October 2009

Patrick Schoenfeld: Building a 15W Debian GNU/Linux system

When the Intel Atom was revealed to the public I couldn't help saying "Wow!", because that piece of hardware promised to be a generic x86 1.6 GHz CPU with a total power consumption of 2 watts, which is amazing considering that x86 hardware generally wasn't an option if you wanted to build a low-power system. But then the first chipsets were presented to the public and the Atom became a farce, because you don't want a chipset that eats over 25W for a CPU which consumes 2W. That was basically laughable.

Recently I found out that there is a new chipset out there, the Intel i945GSE, which runs at about 11W TDP including the soldered-on-board N270 Atom CPU. And I convinced myself that this could become my new home server. Together with a 2.5" drive I could get a system with about 15W maximum power consumption, which is amazing, given that the Arcor Easybox my provider gave me seems to have a similar maximum power consumption, and yet it isn't able to provide me with the great flexibility the new Atom system offers.

So I bought the following components:

It took a while to get those components together, especially because I had initially decided on an Antec case which I ordered from K&M Elektronik, but as they didn't keep their delivery promise I ended up with the M350. What luck.

Running Debian on this machine is the easy part, you would think. This is true, with some exceptions. First: Lenny runs fine. I installed the notebook HD in my desktop and then put it into the Atom when I got the first hardware, and it worked right away. Except for a grub message, which is disturbing and which I haven't managed to fix yet (grub says "Error: No such disk", just to show the menu a second later anyway and boot the system flawlessly).

What didn't work exactly reliably was the onboard network chip. It's quite a shame to say this, but if you buy an Intel board, wouldn't you expect it to use Intel components? Unfortunately this is not true for the Atom board. It has a Realtek RTL8111 network chip, which isn't properly supported by the 2.6.26 kernel (that means the kernel thinks it is and loads the r8169 module, which isn't able to properly detect a link).
The workaround for this is to use the r8168 module from Realtek and compile it for your kernel, but as I equipped this system with an Atheros 2424 PCIe chipset to play WLAN AP, too, I had to upgrade to 2.6.31 anyway, and there the chip is fully supported by r8169.

Making the system an access point was surprisingly easy as well. The greatest pain was finding a Mini PCIe WLAN card, because this isn't very common after all. However, I found one based on an Atheros 2424 chipset and bought it. I additionally bought an SMA antenna connector that I could mount in the case (the M350 has a prepared hole for it) and an SMA antenna.
Setting this up was fairly easy. You need to know that the newer mac80211-based drivers in Linux don't allow setting master mode directly. Instead you need an application which manages everything and is capable of driving the card via netlink: that's hostapd. The unfortunate part is that the Lenny version is too old, so I built myself a (hacky) backport of the sid version, which isn't that hard anyway, because rebuilding against Lenny is enough. Additionally, you need a kernel 2.6.30 with compat-wireless extensions, or a 2.6.31, because before that the ath5k driver didn't support master mode. After that, getting hostapd up is a matter of a 4-15 line configuration file. For me it is now running 802.11g with WPA and a short rekeying interval, with 14 lines of configuration.
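A minimal configuration along those lines looks roughly like this (interface name, SSID and passphrase are of course placeholders, not my actual settings):

# /etc/hostapd/hostapd.conf
interface=wlan0
driver=nl80211
ssid=myhomenet
hw_mode=g
channel=6
wpa=1
wpa_passphrase=change-me-please
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
wpa_group_rekey=600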

All in all I'm satisfied with the system. Without any fan the CPU constantly runs at 55°C, which is okay, given that it must operate between 0 and 90°C according to the tech specs. The system and the disk are somewhat cooler (47 and 39°C). The power of this system is more than enough. It boots quickly, and working with it happens without latencies, even when the system is busy doing something. What I haven't tested yet is whether the power consumption actually meets the expectations. I will do so once I've got a wattmeter.
